Linear regression homework with Yelp votes

Introduction

This assignment uses a small subset of the data from Kaggle's Yelp Business Rating Prediction competition.

Description of the data:

  • yelp.json is the original format of the file. yelp.csv contains the same data, in a more convenient format. Both of the files are in this repo, so there is no need to download the data from the Kaggle website.
  • Each observation in this dataset is a review of a particular business by a particular user.
  • The "stars" column is the number of stars (1 through 5) assigned by the reviewer to the business. (Higher stars is better.) In other words, it is the rating of the business by the person who wrote the review.
  • The "cool" column is the number of "cool" votes this review received from other Yelp users. All reviews start with 0 "cool" votes, and there is no limit to how many "cool" votes a review can receive. In other words, it is a rating of the review itself, not a rating of the business.
  • The "useful" and "funny" columns are similar to the "cool" column.

Task 1

Read yelp.csv into a DataFrame.


In [5]:
# read yelp.csv into a DataFrame using a relative path (the file is included in this repo)
import pandas as pd
yelp = pd.read_csv('yelp.csv')
yelp.head()


Out[5]:
business_id date review_id stars text type user_id cool useful funny
0 9yKzy9PApeiPPOUJEtnvkg 2011-01-26 fWKvX83p0-ka4JS3dc6E5A 5 My wife took me here on my birthday for breakf... review rLtl8ZkDX5vH5nAx9C3q5Q 2 5 0
1 ZRJwVLyzEJq1VAihDhYiow 2011-07-27 IjZ33sJrzXqU-0X6U8NwyA 5 I have no idea why some people give bad review... review 0a2KyEL0d3Yb1V6aivbIuQ 0 0 0
2 6oRAC4uyJCsJl1X0WZpVSA 2012-06-14 IESLBzqUCLdSzSqm0eCSxQ 4 love the gyro plate. Rice is so good and I als... review 0hT2KtfLiobPvh6cDC8JQg 0 1 0
3 _1QQZuf4zZOyFCvXc0o6Vg 2010-05-27 G-WvGaISbqqaMHlNnByodA 5 Rosie, Dakota, and I LOVE Chaparral Dog Park!!... review uZetl9T0NcROGOyFfughhg 1 2 0
4 6ozycU1RpktNG2-1BroVtw 2012-01-05 1uJFq2r5QfJG_6ExMRCaGw 5 General Manager Scott Petello is a good egg!!!... review vYmM4KTsC8ZfQBg-j5MWkw 0 0 0

Task 1 (Bonus)

Ignore the yelp.csv file, and construct this DataFrame yourself from yelp.json. This involves reading the data into Python, decoding the JSON, converting it to a DataFrame, and adding individual columns for each of the vote types.


In [11]:
# read the data from yelp.json into a list of rows,
# decoding each row into a dictionary with json.loads()
import json
import pandas as pd
with open('yelp.json') as f:
    data = [json.loads(row) for row in f]
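
As an aside, pandas can also parse line-delimited JSON like this directly; the nested votes column would still need flattening afterward. A minimal sketch, assuming a reasonably recent pandas:

In [ ]:
# read line-delimited JSON in one call; 'votes' remains a dict-valued column
yelp_raw = pd.read_json('yelp.json', lines=True)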

In [3]:
# show the first review
data[0]


Out[3]:
{u'business_id': u'9yKzy9PApeiPPOUJEtnvkg',
 u'date': u'2011-01-26',
 u'review_id': u'fWKvX83p0-ka4JS3dc6E5A',
 u'stars': 5,
 u'text': u'My wife took me here on my birthday for breakfast and it was excellent.  The weather was perfect which made sitting outside overlooking their grounds an absolute pleasure.  Our waitress was excellent and our food arrived quickly on the semi-busy Saturday morning.  It looked like the place fills up pretty quickly so the earlier you get here the better.\n\nDo yourself a favor and get their Bloody Mary.  It was phenomenal and simply the best I\'ve ever had.  I\'m pretty sure they only use ingredients from their garden and blend them fresh when you order it.  It was amazing.\n\nWhile EVERYTHING on the menu looks excellent, I had the white truffle scrambled eggs vegetable skillet and it was tasty and delicious.  It came with 2 pieces of their griddled bread with was amazing and it absolutely made the meal complete.  It was the best "toast" I\'ve ever had.\n\nAnyway, I can\'t wait to go back!',
 u'type': u'review',
 u'user_id': u'rLtl8ZkDX5vH5nAx9C3q5Q',
 u'votes': {u'cool': 2, u'funny': 0, u'useful': 5}}

In [10]:
# convert the list of dictionaries to a DataFrame
# note that the cool, funny, and useful vote counts are stored in a nested dictionary:
# u'votes': {u'cool': 2, u'funny': 0, u'useful': 5}
yelp = pd.DataFrame(data)
yelp.head(2)


Out[10]:
business_id date review_id stars text type user_id votes
0 9yKzy9PApeiPPOUJEtnvkg 2011-01-26 fWKvX83p0-ka4JS3dc6E5A 5 My wife took me here on my birthday for breakf... review rLtl8ZkDX5vH5nAx9C3q5Q {u'funny': 0, u'useful': 5, u'cool': 2}
1 ZRJwVLyzEJq1VAihDhYiow 2011-07-27 IjZ33sJrzXqU-0X6U8NwyA 5 I have no idea why some people give bad review... review 0a2KyEL0d3Yb1V6aivbIuQ {u'funny': 0, u'useful': 0, u'cool': 0}

In [12]:
# add a DataFrame column for each vote type (cool, useful, and funny)
# use a list comprehension to pull each count out of the nested structure, e.g. row['votes']['cool']
yelp['cool'] = [row['votes']['cool'] for row in data]
yelp['useful'] = [row['votes']['useful'] for row in data]
yelp['funny'] = [row['votes']['funny'] for row in data]

In [13]:
# drop the votes column: its nested dictionaries duplicate the new cool/useful/funny columns
# remember that axis=1 drops a column, while axis=0 drops a row
yelp.drop('votes', axis=1, inplace=True)
yelp.head(1)


Out[13]:
business_id date review_id stars text type user_id cool useful funny
0 9yKzy9PApeiPPOUJEtnvkg 2011-01-26 fWKvX83p0-ka4JS3dc6E5A 5 My wife took me here on my birthday for breakf... review rLtl8ZkDX5vH5nAx9C3q5Q 2 5 0
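
The flatten-and-drop sequence above can also be collapsed into one step with pd.json_normalize, which expands nested dictionaries into dotted column names (votes.cool and so on). A sketch, assuming pandas 1.0 or later:

In [ ]:
# one-step alternative: flatten the nested 'votes' dict while building the DataFrame
yelp_flat = pd.json_normalize(data)
yelp_flat = yelp_flat.rename(columns={'votes.cool': 'cool', 'votes.useful': 'useful', 'votes.funny': 'funny'})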

Task 2

Explore the relationship between each of the vote types (cool/useful/funny) and the number of stars.


In [7]:
# treat stars as a categorical variable and look for differences in mean votes between groups
# note: newer pandas may require mean(numeric_only=True) to skip the non-numeric columns
yelp.groupby('stars').mean()


Out[7]:
cool useful funny
stars
1 0.576769 1.604806 1.056075
2 0.719525 1.563107 0.875944
3 0.788501 1.306639 0.694730
4 0.954623 1.395916 0.670448
5 0.944261 1.381780 0.608631
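
The same group means are easier to compare visually. A quick sketch using pandas plotting:

In [ ]:
# bar chart of mean votes per star rating
yelp.groupby('stars')[['cool', 'useful', 'funny']].mean().plot(kind='bar')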

In [39]:
# correlation matrix heatmap
%matplotlib inline
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt

sns.set(style="white")

# create the correlation matrix (newer pandas may require numeric_only=True here)
corr = yelp.corr()

# generate a mask for the upper triangle (np.bool is deprecated; plain bool works everywhere)
mask = np.zeros_like(corr, dtype=bool)
mask[np.triu_indices_from(mask)] = True

# set up the matplotlib figure
f, ax = plt.subplots(figsize=(11, 8))

# generate a custom diverging colormap
cmap = sns.diverging_palette(220, 10, as_cmap=True)

# draw the heatmap, passing the mask so only the lower triangle is shown
sns.heatmap(corr, mask=mask, cmap=cmap, vmax=.3,
            square=True, linewidths=.5,
            cbar_kws={"shrink": .5})


Out[39]:
<matplotlib.axes._subplots.AxesSubplot at 0x2b488f28>

In [38]:
# multiple scatter plots of each vote type against stars, with fitted regression lines
sns.set(style="white")
sns.pairplot(yelp, x_vars=['cool', 'useful', 'funny'], y_vars='stars',
             size=6, aspect=0.7, kind='reg', markers='+')
# note: newer seaborn renamed the size parameter to height


Out[38]:
<seaborn.axisgrid.PairGrid at 0x2b0672b0>
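
Box plots are another way to treat stars as categorical; here is a sketch for one vote type:

In [ ]:
# distribution of 'cool' votes within each star rating
sns.boxplot(x='stars', y='cool', data=yelp)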

Task 3

Define cool/useful/funny as the features, and stars as the response.


In [40]:
feature_cols = ['cool', 'useful', 'funny']
X = yelp[feature_cols]
y = yelp.stars

Task 4

Fit a linear regression model and interpret the coefficients. Do the coefficients make intuitive sense to you? Explore the Yelp website to see if you detect similar trends.


In [49]:
# instantiate the linear regression model
from sklearn.linear_model import LinearRegression

# store the model in a variable called 'linreg'
# fit_intercept=True (the default) because we don't assume the data has been centered
linreg = LinearRegression(fit_intercept=True)

# fit the linear regression model
linreg.fit(X, y)

# find the coefficient of determination (R^2) and store it as a variable
r2 = linreg.score(X, y)

# pair each feature with its coefficient
# note: these are unstandardized (b) weights; see the statsmodels sketch below for significance tests
coef = list(zip(feature_cols, linreg.coef_))
print(r2)
print(coef)
print(linreg.intercept_)

# overall, votes are a poor predictor of stars; review sentiment is likely to be a better predictor


0.044272856242
[('cool', 0.27435946858859295), ('useful', -0.14745239099400873), ('funny', -0.13567449053705782)]
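
On the open question in the comments above: scikit-learn reports unstandardized coefficients and no p-values, but statsmodels provides significance tests for the same model. A sketch, assuming statsmodels is installed:

In [ ]:
# refit with statsmodels to get p-values and confidence intervals for the coefficients
import statsmodels.formula.api as smf
lm = smf.ols(formula='stars ~ cool + useful + funny', data=yelp).fit()
lm.summary()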

Task 5

Evaluate the model by splitting it into training and testing sets and computing the RMSE. Does the RMSE make intuitive sense to you?


In [53]:
from sklearn.model_selection import train_test_split  # sklearn.cross_validation was removed in newer versions
from sklearn import metrics
import numpy as np

In [54]:
# define a function that accepts a list of features and returns testing RMSE
def train_test_rmse(feature_cols):
    X = yelp[feature_cols]
    y = yelp.stars
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
    linreg = LinearRegression()
    linreg.fit(X_train, y_train)
    y_pred = linreg.predict(X_test)
    return np.sqrt(metrics.mean_squared_error(y_test, y_pred))

In [55]:
# calculate RMSE with all three features
train_test_rmse(['cool', 'useful', 'funny'])


Out[55]:
1.1842905282165919
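
A single train/test split can be sensitive to the random seed; cross-validation averages the error over several splits. A sketch using scikit-learn's model_selection API:

In [ ]:
# 5-fold cross-validated RMSE as a more stable estimate
from sklearn.model_selection import cross_val_score
mse_scores = cross_val_score(LinearRegression(), X, y, cv=5, scoring='neg_mean_squared_error')
print(np.sqrt(-mse_scores).mean())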

Task 6

Try removing some of the features and see if the RMSE improves.


In [15]:
print(train_test_rmse(['cool', 'useful']))
print(train_test_rmse(['cool', 'funny']))
print(train_test_rmse(['useful', 'funny']))


1.19623908761
1.19426732565
1.20982720239
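
Rather than hand-picking subsets, itertools.combinations can drive the same helper over every non-empty subset. A short sketch:

In [ ]:
# evaluate every non-empty subset of the three vote features
from itertools import combinations
for k in range(1, 4):
    for cols in combinations(['cool', 'useful', 'funny'], k):
        print(list(cols), train_test_rmse(list(cols)))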

Task 7 (Bonus)

Think of some new features you could create from the existing data that might be predictive of the response. Figure out how to create those features in Pandas, add them to your model, and see if the RMSE improves.


In [57]:
# new feature: review length (number of characters)
yelp['length'] = yelp.text.apply(len)

In [58]:
# new features: whether or not the review contains 'love' or 'hate'
yelp['love'] = yelp.text.str.contains('love', case=False).astype(int)
yelp['hate'] = yelp.text.str.contains('hate', case=False).astype(int)
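
Other simple text features follow the same pattern; for example, a word count (a sketch: the hypothetical word_count column could be appended to the feature list below in the same way):

In [ ]:
# new feature: review length in words rather than characters
yelp['word_count'] = yelp.text.str.split().str.len()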

In [59]:
# add new features to the model and calculate RMSE
train_test_rmse(['cool', 'useful', 'funny', 'length', 'love', 'hate'])


Out[59]:
1.1584039830984094

Task 8 (Bonus)

Compare your best RMSE on the testing set with the RMSE for the "null model", which is the model that ignores all features and simply predicts the mean response value in the testing set.


In [61]:
# split the data (outside of the function) using the same random_state as before
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

In [62]:
# create a NumPy array with the same shape as y_test
y_null = np.zeros_like(y_test, dtype=float)

In [63]:
# fill the array with the mean of y_test
y_null.fill(y_test.mean())

In [67]:
# calculate the null RMSE
print(np.sqrt(metrics.mean_squared_error(y_test, y_null)))
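
scikit-learn also packages the null model as DummyRegressor, which avoids building the array by hand. Note that it predicts the mean of the training set rather than the testing set, so it is a slightly different baseline. A sketch:

In [ ]:
# null model via DummyRegressor (predicts the training-set mean)
from sklearn.dummy import DummyRegressor
null_model = DummyRegressor(strategy='mean')
null_model.fit(X_train, y_train)
y_null_pred = null_model.predict(X_test)
print(np.sqrt(metrics.mean_squared_error(y_test, y_null_pred)))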

Task 9 (Bonus)

Instead of treating this as a regression problem, treat it as a classification problem and see what testing accuracy you can achieve with KNN.


In [68]:
# import and instantiate KNN
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=50)

In [69]:
# classification models will automatically treat the response values (1/2/3/4/5) as unordered categories
knn.fit(X_train, y_train)
y_pred_class = knn.predict(X_test)
print(metrics.accuracy_score(y_test, y_pred_class))
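
KNN is distance-based, so a feature with a larger spread can dominate the distance calculation; standardizing the features first often helps. A sketch using a scikit-learn pipeline:

In [ ]:
# standardize the features before KNN so no single vote type dominates the distance
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
knn_pipe = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=50))
knn_pipe.fit(X_train, y_train)
print(knn_pipe.score(X_test, y_test))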

Task 10 (Bonus)

Figure out how to use linear regression for classification, and compare its classification accuracy with KNN's accuracy.


In [73]:
# use linear regression to make continuous predictions
linreg = LinearRegression()
linreg.fit(X_train, y_train)
y_pred = linreg.predict(X_test)

In [74]:
# round its predictions to the nearest integer
y_pred_class = y_pred.round()
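
Rounding alone can produce predictions outside the valid 1-5 star range; np.clip guards against that. A small sketch (the hypothetical y_pred_clipped could replace y_pred_class below):

In [ ]:
# keep rounded predictions inside the valid 1-5 star range
y_pred_clipped = np.clip(y_pred.round(), 1, 5)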

In [75]:
# calculate the classification accuracy of the rounded predictions
print(metrics.accuracy_score(y_test, y_pred_class))
